Conversation
Deploying repid with Cloudflare Pages

| | |
|---|---|
| Latest commit: | a53c6db |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://7efa2563.repid.pages.dev |
| Branch Preview URL: | https://sqs.repid.pages.dev |
Coverage Report
102 files skipped due to complete coverage.
Pull request overview
Adds an Amazon SQS transport to the repid.connections layer (backed by aiobotocore) along with ElasticMQ-powered integration coverage, enabling SQS to be used as a first-class Repid server.
Changes:
- Introduces `SqsServer`, `SqsSubscriber`, and `SqsReceivedMessage` implementations using `aiobotocore`.
- Adds an ElasticMQ docker fixture and extensive SQS-specific integration/unit-style tests.
- Exposes `SqsServer` via `repid.connections` when `aiobotocore` is installed and adds `sqs`/typing dependencies in `pyproject.toml`.
Reviewed changes
Copilot reviewed 8 out of 9 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| repid/connections/sqs/message_broker.py | Implements SqsServer lifecycle, publish, and subscribe using aiobotocore. |
| repid/connections/sqs/subscriber.py | Implements background consumption, concurrency limiting, and pause/resume/close semantics for SQS. |
| repid/connections/sqs/message.py | Implements ReceivedMessageT actions (ack/nack/reject/reply) for SQS messages. |
| repid/connections/sqs/__init__.py | Exports SqsServer from the SQS subpackage. |
| repid/connections/__init__.py | Conditionally exposes SqsServer when aiobotocore is available. |
| tests/integration/conftest.py | Adds an ElasticMQ container + sqs_connection fixture and includes it in autoconn parametrization. |
| tests/integration/test_sqs_specific.py | Adds a large SQS-specific test suite covering server/message/subscriber behaviors and edge cases. |
| pyproject.toml | Adds sqs extra (aiobotocore) and typing dependency (types-aiobotocore[sqs]). |
@copilot apply changes based on the comments in this thread
Agent-Logs-Url: https://github.com/aleksul/repid/sessions/41e4011b-59e3-473e-b89d-7c4d113085c8 Co-authored-by: aleksul <63556725+aleksul@users.noreply.github.com>
Applied all changes from the review thread (commits c15ea3d and c1aa69f):
Pull request overview
Copilot reviewed 8 out of 9 changed files in this pull request and generated 5 comments.
@copilot apply changes based on the comments in this thread
Agent-Logs-Url: https://github.com/aleksul/repid/sessions/fa1879bb-5564-446e-b78f-0608610e7f7a Co-authored-by: aleksul <63556725+aleksul@users.noreply.github.com>
Applied all changes from the thread in commit d2b1e13:
Pull request overview
Copilot reviewed 8 out of 9 changed files in this pull request and generated 4 comments.
```python
body_str = base64.b64encode(message.payload).decode("ascii")
if not body_str:
    body_str = " "
```
publish() uses a single-space MessageBody sentinel when the payload is empty, but SqsReceivedMessage base64-decodes with validate=True and falls back to UTF-8 bytes on failure. This makes an empty payload round-trip as b" " instead of b"". Consider treating the sentinel body (e.g., exactly " ") as an empty payload when parsing, or use a different encoding strategy that preserves empty bytes while satisfying SQS' non-empty body requirement.
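The round-trip the comment describes can be sketched standalone. `SENTINEL`, `encode_payload`, and `decode_payload` are hypothetical names for illustration, not the actual repid API:

```python
import base64
import binascii

# Hypothetical sentinel: SQS requires a non-empty MessageBody, so an empty
# payload is sent as a single space and mapped back to b"" on receive.
SENTINEL = " "

def encode_payload(payload: bytes) -> str:
    body = base64.b64encode(payload).decode("ascii")
    return body or SENTINEL

def decode_payload(body: str) -> bytes:
    if body == SENTINEL:
        return b""  # preserve empty bytes across the round-trip
    try:
        return base64.b64decode(body, validate=True)
    except (binascii.Error, ValueError):
        # Non-base64 body: fall back to the raw UTF-8 bytes.
        return body.encode("utf-8", errors="replace")
```

With the sentinel special-cased on the decode side, `decode_payload(encode_payload(b""))` yields `b""` rather than `b" "`.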
repid/connections/sqs/message.py (Outdated)
```python
try:
    self._payload = base64.b64decode(body, validate=True)
except (binascii.Error, ValueError):
    self._payload = str(body).encode("utf-8", errors="replace")
```
SqsReceivedMessage currently attempts strict base64 decode and on failure returns the UTF-8 bytes of the raw body. Since SqsServer.publish()/reply() use a single space as a sentinel for empty payloads, this code will decode that sentinel into b" " instead of an empty payload. Special-case the sentinel body (exactly " ") to produce b"" before attempting base64 decode.
Suggested change:

```diff
-try:
-    self._payload = base64.b64decode(body, validate=True)
-except (binascii.Error, ValueError):
-    self._payload = str(body).encode("utf-8", errors="replace")
+if body == " ":
+    self._payload = b""
+else:
+    try:
+        self._payload = base64.b64decode(body, validate=True)
+    except (binascii.Error, ValueError):
+        self._payload = str(body).encode("utf-8", errors="replace")
```
repid/connections/sqs/subscriber.py (Outdated)
```python
# Suppress exceptions from the detached reject task
def _ignore_exception(task: asyncio.Task) -> None:
    with contextlib.suppress(Exception):
```
In the detached reject() task created during cancellation, _ignore_exception only suppresses Exception. If the reject task itself is cancelled, task.result() raises asyncio.CancelledError (a BaseException), which can surface as an unhandled exception in the event loop's done-callback handling. Suppress asyncio.CancelledError here as well (as is already done in _consume_channel).
Suggested change:

```diff
-    with contextlib.suppress(Exception):
+    with contextlib.suppress(asyncio.CancelledError, Exception):
```
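A self-contained sketch of why the broader suppression matters; it assumes nothing about repid beyond the callback shape shown above:

```python
import asyncio
import contextlib

def _ignore_exception(task: asyncio.Task) -> None:
    # CancelledError subclasses BaseException on Python 3.8+, so
    # suppress(Exception) alone would let it escape the done-callback.
    with contextlib.suppress(asyncio.CancelledError, Exception):
        task.result()

async def main() -> bool:
    task = asyncio.create_task(asyncio.sleep(10))
    task.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await task
    # task.result() now raises CancelledError; the callback suppresses it.
    _ignore_exception(task)
    return task.cancelled()
```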
```python
task = asyncio.create_task(sub._consume_channel("default"))
await asyncio.sleep(0.1)
task.cancel()
await asyncio.sleep(0.1)

# ...

server._client = original_client
await sub.close()
```
This test cancels the background task but never awaits it. That can lead to "Task was destroyed but it is pending" warnings (or failures under stricter asyncio settings). After task.cancel(), await the task under contextlib.suppress(asyncio.CancelledError) (or ensure it finishes) before ending the test.
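The suggested cleanup can be sketched as follows; the `asyncio.sleep(10)` task is a stand-in for the real `_consume_channel` coroutine:

```python
import asyncio
import contextlib

async def cancel_and_reap(task: asyncio.Task) -> bool:
    """Cancel a task and await it so it is finalized before the test ends."""
    task.cancel()
    # Awaiting the cancelled task avoids "Task was destroyed but it is
    # pending" warnings under stricter asyncio settings.
    with contextlib.suppress(asyncio.CancelledError):
        await task
    return task.cancelled()

async def main() -> bool:
    task = asyncio.create_task(asyncio.sleep(10))  # stand-in background consumer
    await asyncio.sleep(0.1)
    return await cancel_and_reap(task)
```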
@copilot apply changes based on the comments in this thread
…ed task in test Agent-Logs-Url: https://github.com/aleksul/repid/sessions/61c3fe0b-b86f-4769-abd9-62baab78188b Co-authored-by: aleksul <63556725+aleksul@users.noreply.github.com>
Applied all changes from the thread in commit a53c6db:
Pull request overview
Copilot reviewed 8 out of 9 changed files in this pull request and generated 3 comments.
```python
def is_active(self) -> bool:
    return self._active

# ...

@property
def task(self) -> asyncio.Task:
    if self._main_task is None:
        raise RuntimeError("Subscriber is not active.")
    return self._main_task
```
is_active is driven solely by the _active flag, but _active is never reset when the main consume task finishes on its own (e.g., when channels_to_callbacks is empty and _consume() returns immediately). This can leave a subscriber reporting active even though its task is already done, and resume() won’t restart it. Consider defining is_active based on both _active and _main_task state (e.g., _main_task is not None and not _main_task.done()), and/or ensuring _consume() sets _active = False on exit.
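A minimal model of the suggested semantics; `SubscriberSketch` is a hypothetical class for illustration, not the real subscriber:

```python
import asyncio
from typing import Optional

class SubscriberSketch:
    """Hypothetical minimal model of the suggested is_active semantics."""

    def __init__(self) -> None:
        self._active = False
        self._main_task: Optional[asyncio.Task] = None

    @property
    def is_active(self) -> bool:
        # Report active only while the flag is set AND the consume task
        # actually exists and has not finished on its own.
        return (
            self._active
            and self._main_task is not None
            and not self._main_task.done()
        )
```

This way a subscriber whose `_consume()` returned immediately no longer reports itself as active.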
```python
    MessageAttributeNames=["All"],
)

messages = response.get("Messages", [])
```
When receive_wait_time_seconds is configured as 0 (used in tests and potentially by users), an empty receive response results in a tight loop that repeatedly calls receive_message() with no backoff. This can cause unnecessary CPU usage and request spam. Consider adding a small sleep/yield when messages is empty and wait time is 0 (or enforcing a minimum wait time) to prevent busy looping.
Suggested change:

```diff
 messages = response.get("Messages", [])
+if not messages and self._server._receive_wait_time_seconds <= 0:
+    await asyncio.sleep(0.1)
+    continue
```
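The backoff behavior can be sketched in isolation; `poll_loop` and its parameters are hypothetical names, standing in for the subscriber's receive loop:

```python
import asyncio
from typing import Awaitable, Callable, List

async def poll_loop(
    receive: Callable[[], Awaitable[List[str]]],
    wait_seconds: float,
    handle: Callable[[str], Awaitable[None]],
    max_polls: int,
) -> None:
    """Hypothetical receive loop: back off briefly when short polling is empty."""
    for _ in range(max_polls):
        messages = await receive()
        if not messages and wait_seconds <= 0:
            # With WaitTimeSeconds=0 an empty queue returns immediately;
            # sleeping avoids hammering receive_message() in a tight loop.
            await asyncio.sleep(0.01)
            continue
        for message in messages:
            await handle(message)
```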
```python
from repid.connections.sqs.message_broker import SqsServer

__all__ = ["SqsServer"]
```
repid/connections/sqs/__init__.py uses a plain from ... import SqsServer plus a string-based __all__. In this codebase, other connection subpackages re-export with from ... import <Name> as <Name> (e.g., repid/connections/redis/__init__.py:1-2, repid/connections/nats/__init__.py:1) which avoids unused-import linting and keeps the pattern consistent. Consider switching to from repid.connections.sqs.message_broker import SqsServer as SqsServer and dropping __all__ to match the established convention.
Suggested change:

```diff
-from repid.connections.sqs.message_broker import SqsServer
-__all__ = ["SqsServer"]
+from repid.connections.sqs.message_broker import SqsServer as SqsServer
```
No description provided.